Data breach



The Tea App Is Back With a New Website

WIRED

Months after major data leaks, the app where women leave Yelp-style reviews about men is relaunching with a new website. It's not back on iOS, but the Android app has new AI features. The embattled Tea app is back. Months after being removed from Apple's App Store in the wake of major data breaches, the app that allows women to share anonymous Yelp-style reviews of men is relaunching with a new website designed to help women "access dating guardrails without limitation," Tea's head of trust and safety Jessica Dees told WIRED. The app, which launched in 2023 and went viral last summer, reaching No. 1 on the iOS App Store, lets users post photos of men while also pointing out red flags, such as whether they are already partnered or are registered sex offenders.


10M Americans hit in government contractor data breach

FOX News



Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India

Agarwal, Avinash, Nene, Manisha J.

arXiv.org Artificial Intelligence

The integration of artificial intelligence (AI) into telecommunications infrastructure introduces novel risks, such as algorithmic bias and unpredictable system behavior, that fall outside the scope of traditional cybersecurity and data protection frameworks. This paper introduces a precise definition and a detailed typology of telecommunications AI incidents, establishing them as a distinct category of risk that extends beyond conventional cybersecurity and data protection breaches. It argues for their recognition as a distinct regulatory concern. Using India as a case study for jurisdictions that lack a horizontal AI law, the paper analyzes the country's key digital regulations. The analysis reveals that India's existing legal instruments, including the Telecommunications Act, 2023, the CERT-In Rules, and the Digital Personal Data Protection Act, 2023, focus on cybersecurity and data breaches, creating a significant regulatory gap for AI-specific operational incidents, such as performance degradation and algorithmic bias. The paper also examines structural barriers to disclosure and the limitations of existing AI incident repositories. Based on these findings, the paper proposes targeted policy recommendations centered on integrating AI incident reporting into India's existing telecom governance. Key proposals include mandating reporting for high-risk AI failures, designating an existing government body as a nodal agency to manage incident data, and developing standardized reporting frameworks. These recommendations aim to enhance regulatory clarity and strengthen long-term resilience, offering a pragmatic and replicable blueprint for other nations seeking to govern AI risks within their existing sectoral frameworks.


11 easy ways to protect your online privacy in 2025

FOX News

Tech expert Kurt Knutsson discusses tips on how to protect your data amid AI privacy concerns. Privacy is getting harder to protect in a world where everything is connected. Whether you're chatting with an AI, checking your email or using your smartphone, your personal information is constantly being collected, tracked and sometimes even sold. But protecting your privacy in 2025 doesn't have to be overwhelming. With a few practical steps, you can take back control of your data and make your online life safer.


Huge data breach sees 50,000 profiles LEAKED from 'Gay Daddy' dating app - exposing users' names, private photos, and HIV status

Daily Mail - Science & tech

A huge data breach has leaked over 50,000 profiles from the 'Gay Daddy' dating app, cybersecurity researchers have discovered. The exposed data contains extremely sensitive information including users' names, ages, location data and HIV status. According to experts from Cybernews, the exposed database also contains over 124,000 private messages and photos – many of which are explicit. While the app markets itself as a 'private and anonymous community', researchers say the information could be accessed by anyone with 'basic technical knowledge'. Researchers say the app's 'devastating' security failure puts its users at serious risk of blackmail, exploitation and even physical harm.


Urgent warning as 1.5 MILLION private photos are leaked from BDSM dating apps - so, have your sexy snaps been exposed?

Daily Mail - Science & tech

Cybersecurity researchers have issued an urgent warning after almost 1.5 million private photos from dating apps were exposed. Affected apps include the kink dating sites BDSM People and CHICA, as well as LGBT dating services PINK, BRISH, and TRANSLOVE - all of which were developed by M.A.D Mobile. The leaked files include photos used for verification, photos removed by app moderators, and photos sent in direct messages between users - many of which were explicit. These sensitive snaps were being stored online without password protection, meaning anyone with the link could view and download them. Researchers from Cybernews, who discovered the vulnerability, say this easily exploited security flaw put up to 900,000 users at risk of further hacks or extortion.


Amazon-hosted AI tool for UK military recruitment 'carries risk of data breach'

The Guardian

An artificial intelligence tool hosted by Amazon and designed to boost UK Ministry of Defence recruitment puts defence personnel at risk of being identified publicly, according to a government assessment. Data used in the automated system, which improves the drafting of defence job adverts and attracts more diverse candidates by making their language more inclusive, includes the names, roles and emails of military personnel and is stored by Amazon in the US. This means "a data breach may have concerning consequences, ie identification of defence personnel", according to documents detailing government AI systems published for the first time today. The risk has been judged to be "low" and the MoD said "robust safeguards" have been put in place by the suppliers, Textio, Amazon Web Services and Amazon GuardDuty, a threat detection service. But it is one of several risks acknowledged by the government about its use of AI tools in the public sector in a tranche of documents released to improve transparency about central government's use of algorithms. Official declarations about how the algorithms work stress that mitigations and safeguards are in place to tackle risks, as ministers push to use AI to boost UK economic productivity and, in the words of the technology secretary, Peter Kyle, on Tuesday, "bring public services back from the brink".


Passwords are giving way to better security methods – until those are hacked too, that is

The Guardian

We humans are simply too dumb to use passwords. A recent study from password manager NordPass found that "secret" was the most commonly used password in 2024. That was followed by "123456" and "password". So let's all give praise that the password is dying. Yes, we know that we should be using 20-letter passwords with weird symbols and numbers, but our minds can't cope.
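The long, symbol-laden passwords the article says our minds can't cope with are exactly what machines generate easily. A minimal sketch using Python's standard `secrets` module (the function name and length are illustrative choices, not from the article):

```python
# Illustrative sketch: generating a 20-character random password of the
# kind the article recommends, using Python's cryptographically secure
# secrets module rather than human memory.
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Return a random password of letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


pw = generate_password()
print(pw)
```

In practice such passwords are meant to live in a password manager, not in memory, which is the article's underlying point.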


STRisk: A Socio-Technical Approach to Assess Hacking Breaches Risk

Hammouchi, Hicham, Nejjari, Narjisse, Mezzour, Ghita, Ghogho, Mounir, Benbrahim, Houda

arXiv.org Artificial Intelligence

Data breaches have begun to take on new dimensions and their prediction is becoming of great importance to organizations. Prior work has addressed this issue mainly from a technical perspective and neglected other interfering aspects such as the social media dimension. To fill this gap, we propose STRisk, a predictive system in which we expand the scope of the prediction task by bringing into play the social media dimension. We study over 3800 US organizations, including both victim and non-victim organizations. For each organization, we design a profile composed of a variety of externally measured technical indicators and social factors. In addition, to account for unreported incidents, we consider the non-victim sample to be noisy and propose a noise correction approach to correct mislabeled organizations. We then build several machine learning models to predict whether an organization is at risk of experiencing a hacking breach. By exploiting both technical and social features, we achieve an Area Under the Curve (AUC) score exceeding 98%, which is 12% higher than the AUC achieved using only technical features. Furthermore, our feature importance analysis reveals that open ports and expired certificates are the best technical predictors, while spreadability and agreeability are the best social predictors.
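The prediction setup the abstract describes can be sketched as a standard supervised pipeline: concatenate technical and social features per organization, train a classifier, and score it with AUC. This is not the authors' code; the features, data, and model choice below are synthetic placeholders for illustration only.

```python
# Illustrative sketch (not the STRisk implementation): a classifier over
# combined technical and social features, evaluated with AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical technical indicators (stand-ins for signals like open
# ports or expired certificates) and social factors (stand-ins for
# spreadability and agreeability).
technical = rng.normal(size=(n, 4))
social = rng.normal(size=(n, 2))
X = np.hstack([technical, social])

# Synthetic breach labels driven by both feature groups, with noise.
signal = 0.8 * technical[:, 0] + 0.6 * social[:, 0]
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC on synthetic data: {auc:.3f}")
```

The paper's actual contribution also includes a noise-correction step for mislabeled non-victim organizations, which this sketch omits.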